Transition to Computer Vision
Today we move from handling simple, structured data with basic linear layers to tackling high-dimensional image data. A single color image introduces significant complexity that standard architectures cannot handle efficiently. Deep learning for vision calls for a specialized architecture: the Convolutional Neural Network (CNN).
1. Why Do Fully Connected Networks (FCNs) Fail?
In an FCN, every input pixel must connect to every neuron in the next layer. For high-resolution images, this causes a computational explosion, making training impractical and generalization poor due to extreme overfitting. The numbers below make this concrete (see the sketch after this list):
- Input Dimension: A standard $224 \times 224$ RGB image results in $150,528$ input features ($224 \times 224 \times 3$).
- Hidden Layer Size: Suppose the first hidden layer uses 1,024 neurons.
- Total Parameters (Layer 1): $\approx 154$ million weights ($150,528 \times 1024$) just for the first connection block, requiring massive memory and compute time.
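To see the gap concretely, here is a quick back-of-the-envelope sketch in Python. The convolutional side assumes a hypothetical layer of 64 filters of size $3 \times 3$ (an illustrative choice, not a figure from the lesson):

```python
# Back-of-the-envelope parameter counts: dense layer vs. a small conv layer.
# The 64-filter, 3x3 convolution is an illustrative assumption.

H, W, C = 224, 224, 3          # input image: height, width, channels
inputs = H * W * C             # 150,528 input features when flattened

# Fully connected: every input feature connects to every hidden neuron.
hidden = 1024
fc_params = inputs * hidden    # weights only (biases omitted for clarity)

# Convolutional: each filter is reused at every spatial location.
filters, k = 64, 3
conv_params = filters * (k * k * C)   # 64 * 9 * 3 = 1,728 weights

print(f"FCN layer 1 weights : {fc_params:,}")     # 154,140,672
print(f"Conv layer weights  : {conv_params:,}")   # 1,728
print(f"Reduction factor    : {fc_params / conv_params:,.0f}x")
```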
The CNN Solution
CNNs solve the scalability problem of FCNs by exploiting the spatial structure of images. They identify patterns (such as edges or curves) using small filters, reducing the number of parameters by orders of magnitude and promoting robustness.
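As a minimal illustration (plain NumPy, with a Sobel-style vertical-edge kernel chosen for the example), the sketch below slides one $3 \times 3$ filter across a toy image; the same nine weights are reused at every position:

```python
import numpy as np

# A minimal convolution sketch: one 3x3 vertical-edge filter slid across a
# toy grayscale image. The same 9 weights are reused at every spatial
# position: weight sharing in miniature.

image = np.zeros((8, 8))
image[:, 4:] = 1.0             # left half dark, right half bright: a vertical edge

kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

out_h, out_w = image.shape[0] - 2, image.shape[1] - 2
feature_map = np.zeros((out_h, out_w))
for i in range(out_h):
    for j in range(out_w):
        patch = image[i:i + 3, j:j + 3]          # local receptive field
        feature_map[i, j] = np.sum(patch * kernel)

print(feature_map)   # strong responses only along the columns where the edge sits
```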
[Interactive demo: Parameter Efficiency Inspector, which visualizes the parameter counts compared above.]
Question 1
What is the primary benefit of using Local Receptive Fields in CNNs?
Question 2
If a $3 \times 3$ filter is applied across an entire image, what core CNN concept is being utilized?
Question 3
Which CNN component is responsible for progressively reducing the spatial dimensions (width and height) of the feature maps?
Challenge: Identifying Key CNN Components
Relate CNN mechanisms to their functional benefits.
We need to build a vision model that is highly parameter efficient and can recognize an object even if it slightly shifts its position in the image.
Step 1
Which mechanism ensures the network can identify a feature (like a diagonal line) regardless of where it is in the frame?
Solution:
Shared Weights. By applying the same filter at every location, the network detects a feature wherever it appears. Strictly, convolution gives translation equivariance (the response shifts along with the input), which, combined with pooling, yields the approximate translation invariance we want.
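A minimal NumPy sketch of this behavior (the loop-based "valid" convolution below is a naive illustration, not a production implementation): shifting the input shifts the feature map by the same amount.

```python
import numpy as np

# Translation-equivariance sketch: convolving a shifted input with the same
# shared 3x3 filter produces a correspondingly shifted feature map.

def conv2d(image, kernel):
    """Naive 'valid' cross-correlation with a 3x3 kernel."""
    h, w = image.shape[0] - 2, image.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

kernel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

image = np.zeros((10, 10))
image[:, 3] = 1.0                        # a vertical line at column 3
shifted = np.roll(image, 2, axis=1)      # same line, shifted 2 columns right

a = conv2d(image, kernel)
b = conv2d(shifted, kernel)

# The response to the shifted input equals the original response, shifted by 2.
print(np.allclose(a[:, :-2], b[:, 2:]))  # True
```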
Step 2
What architectural choice allows a CNN to detect features with fewer parameters than an FCN?
Solution:
Local Receptive Fields (or Sparse Connectivity). Instead of connecting to every pixel, each neuron only connects to a small, localized region of the input.
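As a sanity check, a short sketch assuming PyTorch is available (the $32 \times 32$ input and layer widths are illustrative) counts the learnable parameters of a dense layer versus a $3 \times 3$ convolution:

```python
import torch.nn as nn

# Parameter-count comparison on a small 32x32 RGB input (sizes are
# illustrative). The dense layer connects every pixel to every unit;
# the conv layer only learns one small shared filter bank.

def n_params(module: nn.Module) -> int:
    """Total learnable parameters in a module."""
    return sum(p.numel() for p in module.parameters())

dense = nn.Linear(32 * 32 * 3, 256)
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

print(f"Linear params: {n_params(dense):,}")  # 786,688 (3072*256 + 256 biases)
print(f"Conv2d params: {n_params(conv):,}")   # 1,792   (64*3*3*3 + 64 biases)
```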
Step 3
How does the CNN structure lead to hierarchical feature learning (e.g., edges $\to$ corners $\to$ objects)?
Solution:
Stacked Layers. Early layers learn simple features (edges) using convolution. Deeper layers combine the outputs of earlier layers to form complex, abstract features (objects).
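One way to see why stacking creates a hierarchy: every additional $3 \times 3$ convolution widens the patch of the original image that a single output unit can see. A small sketch of this receptive-field arithmetic, assuming stride 1 throughout:

```python
# Receptive-field growth for a stack of 3x3, stride-1 convolutions.
# Each extra layer lets one output unit "see" a wider patch of the
# original image: this widening is what turns edge detectors into
# detectors of corners, parts, and eventually whole objects.

kernel = 3
rf = 1                           # a raw input pixel sees only itself
for layer in range(1, 6):
    rf += kernel - 1             # with stride 1, each layer adds (k - 1)
    print(f"after layer {layer}: receptive field = {rf}x{rf} pixels")
# after layer 1: 3x3, layer 2: 5x5, layer 3: 7x7, layer 4: 9x9, layer 5: 11x11
```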